- Loading data onto the Big Data platform;
- Targeted preparation of data for specific use cases and creation of data profiles;
- Development of operational data pipelines;
- Coordination of the service provider's activities (complex cases only).
- More than 3 years of experience;
- English (Advanced/Fluent);
- Confident with Apache Spark and the programming languages Python, R, Scala, Java, and Kotlin;
- Knowledge of Kafka (Must have) and Confluent connectors;
- Knowledge of Red Hat Linux (Must have);
- Understanding of AWS cloud ingestion solutions (AWS Glue, AWS Kinesis) (Must have);
- Understanding of efficient data processing and storage in AWS S3 and databases (on-prem and cloud) (Must have);
- Knowledge of relevant source systems and their data types and structures;
- Experience implementing batch and real-time interfaces;
- Extensive software engineering expertise for production-grade data products (incl. testing, deployment, code quality, logging, monitoring, operation);
- Experience in the development of distributed systems;
- Experience with Docker-based deployment and resource managers (Kubernetes);
- Knowledge of GitLab (Must have) or Jenkins;
- Experience with CI/CD pipelines and tools (AWS Data Pipeline) (Must have);
- Knowledge and practical experience with Workflow Managers (Airflow);
- Detailed knowledge of agile software development according to Scrum;
- Automotive industry knowledge.
Company
Location
Lisbon - Portugal
Job type
Full-Time
Python Job Details
SKILLS
Must have:
Amazon Web Services
Apache Kafka
Other Required:
Apache Spark
Kotlin
Java
Python